vine copula
Trunc-Opt vine building algorithms
Pfeifer, Dániel, Kovács, Edith Alice
Vine copula models have become highly popular and practical tools for modelling multivariate probability distributions, owing to their flexibility in capturing different kinds of dependence between the random variables involved. However, this flexibility comes with the drawback of a high-dimensional parameter space. To tackle this problem, truncated vine copulas were introduced by Kurowicka (2010) in the Gaussian case and by Brechmann and Czado (2013) in the general case. Truncated vine copulas contain only conditionally independent pair copulas beyond the truncation level. So far, in the general case, algorithms for constructing truncated vines have started from the lowest tree, in order to encode the strongest dependences in the lower trees. The novelty of this paper starts from the observation that a truncated vine is determined by the first tree after the truncation level (see Kovács and Szántai (2017)). We introduce a new score for fitting truncated vines to given data, called the weight of the truncated vine, and then propose a completely new methodology for constructing truncated vines, proving theorems that motivate this approach. While earlier algorithms did not use conditional independences, we give algorithms for constructing and encoding truncated vines that do exploit them. Finally, we illustrate the algorithms on real datasets and compare the results with well-known methods included in R packages. Our methods generally compare favorably with previously known ones.
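The abstract contrasts the new approach with earlier algorithms that start from the lowest vine tree. The sketch below illustrates only that common starting point: selecting the first tree as a maximum spanning tree over pairwise |Kendall's tau| weights (the greedy step popularized by Dissmann et al.). It is not the paper's Trunc-Opt algorithm; the data and function name are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kendalltau

def first_vine_tree(data):
    """Return edges of a maximum spanning tree over the variables,
    weighted by absolute pairwise Kendall's tau (Prim's algorithm)."""
    n, d = data.shape
    w = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            tau, _ = kendalltau(data[:, i], data[:, j])
            w[i, j] = w[j, i] = abs(tau)
    in_tree = {0}
    edges = []
    while len(in_tree) < d:
        # pick the heaviest edge leaving the current tree (maximum ST)
        best = max(((i, j, w[i, j]) for i in in_tree
                    for j in range(d) if j not in in_tree),
                   key=lambda e: e[2])
        edges.append(best)
        in_tree.add(best[1])
    return edges

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 1))
# four variables: dependence chains through x, the last one is noise
data = np.hstack([x,
                  x + 0.3 * rng.normal(size=(500, 1)),
                  2 * x + rng.normal(size=(500, 1)),
                  rng.normal(size=(500, 1))])
tree = first_vine_tree(data)
print(sorted((i, j) for i, j, _ in tree))
```

The spanning tree on d variables always has d − 1 edges; starting from this first tree, greedy algorithms proceed upward, whereas the paper's approach works from the first tree after the truncation level instead.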
- Europe > Hungary > Budapest > Budapest (0.04)
- North America > United States > Wisconsin (0.04)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.67)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.67)
Mixed vine copulas as joint models of spike counts and local field potentials
Concurrent measurements of neural activity at multiple scales, sometimes performed with multimodal techniques, are becoming increasingly important for studying brain function. However, statistical methods for their concurrent analysis are currently lacking. Here we introduce such techniques in a framework based on vine copulas with mixed margins to construct multivariate stochastic models. These models can describe in detail the mixed interactions between discrete variables, such as neural spike counts, and continuous variables, such as local field potentials. We propose efficient methods for likelihood calculation, inference, sampling and mutual information estimation within this framework. We test our methods on simulated data and demonstrate their applicability on mixed data generated by a biologically realistic neural network. Our methods hold the promise of considerably improving the statistical analysis of neural data recorded simultaneously at different scales.
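The key technical ingredient for mixed margins is that a discrete margin enters the copula through a difference of conditional (h-)functions rather than a density. The sketch below shows this standard construction for a single Gaussian-copula pair with a Poisson count and a normal variable; it is a minimal illustration of the discrete-margin likelihood, not the authors' full mixed-vine implementation, and the Poisson/normal margins and the parameter `rho` are assumptions for the example.

```python
import numpy as np
from scipy.stats import norm, poisson

def gauss_h(u, v, rho):
    """h-function of the Gaussian copula, dC(u, v)/dv:
    Phi((Phi^-1(u) - rho * Phi^-1(v)) / sqrt(1 - rho^2))."""
    a = norm.ppf(u)   # ppf(0) = -inf, ppf(1) = +inf; norm.cdf maps to 0 / 1
    b = norm.ppf(v)
    return norm.cdf((a - rho * b) / np.sqrt(1.0 - rho ** 2))

def mixed_pair_density(x, y, rho, mu=3.0):
    """Joint density of a Poisson(mu) count X and a standard normal Y,
    coupled by a Gaussian copula with parameter rho.  The discrete
    margin is handled by differencing the h-function over the pmf step:
        f(x, y) = g(y) * [h(F(x) | G(y)) - h(F(x-1) | G(y))]."""
    v = norm.cdf(y)               # continuous margin mapped to [0, 1]
    u_hi = poisson.cdf(x, mu)     # F(x)
    u_lo = poisson.cdf(x - 1, mu) # F(x-1); equals 0 when x = 0
    return norm.pdf(y) * (gauss_h(u_hi, v, rho) - gauss_h(u_lo, v, rho))

# under independence (rho = 0) the joint density factorizes into
# the Poisson pmf times the normal pdf
f0 = mixed_pair_density(2, 0.5, rho=0.0)
print(f0, poisson.pmf(2, 3.0) * norm.pdf(0.5))
```

A useful sanity check is that summing the joint density over all count values telescopes to the continuous marginal density, since h(1 | v) − h(0 | v) = 1.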
- Europe > Italy (0.04)
- South America > Colombia (0.04)
- Europe > Spain > Catalonia > Barcelona Province > Barcelona (0.04)
Appendix A Algorithm
The algorithm for training an IGC model is described in Algorithm 1 (Training algorithm). GMMN models are trained using the Adam optimizer with standard values. Table 3 presents the results: the GAN achieves the best result for the Swiss Roll data, while for the Gaussians data the TLL2 copula shows the lowest NLL, closely followed by the IGC and GMMN copulas; here the GAN shows the worst performance of all models.
Throwing Vines at the Wall: Structure Learning via Random Search
Vatter, Thibault, Nagler, Thomas
Vine copulas offer flexible multivariate dependence modeling and have become widely used in machine learning, yet structure learning remains a key challenge. Early heuristics like the greedy algorithm of Dissmann are still considered the gold standard, but are often suboptimal. We propose random search algorithms that improve structure selection, and a statistical framework based on model confidence sets, which provides theoretical guarantees on selection probabilities and a powerful foundation for ensembling. Empirical results on several real-world data sets show that our methods consistently outperform state-of-the-art approaches.
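The core idea of random structure search can be illustrated on the first vine tree alone: sample labeled spanning trees uniformly at random (via Prüfer sequences) and keep the highest-scoring one. This is only a minimal sketch of that idea under a simple sum-of-edge-weights score; the paper's actual algorithms and its model-confidence-set framework are not reproduced here, and the weight matrix below is invented for illustration.

```python
import numpy as np

def prufer_to_tree(seq, d):
    """Decode a Prüfer sequence (length d - 2) into the edge list of a
    labeled spanning tree on d nodes (uniform over all labeled trees
    when the sequence is drawn uniformly)."""
    degree = [1] * d
    for v in seq:
        degree[v] += 1
    edges = []
    for v in seq:
        leaf = min(i for i in range(d) if degree[i] == 1)
        edges.append((leaf, v))
        degree[leaf] -= 1
        degree[v] -= 1
    a, b = [i for i in range(d) if degree[i] == 1]
    edges.append((a, b))
    return edges

def random_search_tree(weight, n_iter=500, seed=0):
    """Random search over spanning trees: sample uniformly and keep the
    tree with the largest total edge weight (e.g. sum of |tau|)."""
    d = weight.shape[0]
    rng = np.random.default_rng(seed)
    best_edges, best_score = None, -np.inf
    for _ in range(n_iter):
        seq = rng.integers(0, d, size=d - 2)
        edges = prufer_to_tree(list(seq), d)
        score = sum(weight[i, j] for i, j in edges)
        if score > best_score:
            best_edges, best_score = edges, score
    return best_edges, best_score

# toy weights: the path 0-1-2-3 carries the strong dependences
w = np.full((4, 4), 0.1)
np.fill_diagonal(w, 0.0)
for i, j in [(0, 1), (1, 2), (2, 3)]:
    w[i, j] = w[j, i] = 0.9
edges, score = random_search_tree(w)
print(edges, round(score, 3))
```

With only 16 labeled trees on 4 nodes, 500 uniform draws recover the optimal path essentially surely; in higher dimensions this is where statistical guarantees on selection probabilities, as developed in the paper, become important.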
- Europe > Austria > Vienna (0.14)
- North America > United States > California (0.05)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- North America > United States (0.04)
- North America > Canada (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Research Report > New Finding (0.46)
- Research Report > Experimental Study (0.46)
- Europe > Germany > Hesse > Darmstadt Region > Darmstadt (0.04)
- North America > Canada (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)